11 research outputs found

    Multimodal Assessment of Cognitive Decline: Applications in Alzheimer’s Disease and Depression

    The initial diagnosis and assessment of cognitive decline are generally based on clinicians' judgement and on commonly used semi-structured interviews, guided by pre-determined sets of topics, in a clinical setting. Publicly available multimodal datasets have provided an opportunity to explore a range of experiments in the automatic detection of cognitive decline. Drawing on the latest developments in representation learning, machine learning, and natural language processing, we seek to develop models capable of identifying cognitive decline, with an eye to discovering the differences and commonalities that should be considered in the computational treatment of mental health disorders. We present models that learn indicators of cognitive decline from audio and visual modalities as well as from lexical, syntactic, disfluency, and pause information. Our study is carried out in two parts: moderation analysis and predictive modelling. We experiment with different fusion techniques, motivated by recent efforts in multimodal fusion for classifying cognitive states, to capture the interaction between modalities and to maximise the use and combination of each modality. We create tools for detecting cognitive decline and use them to analyse three major datasets containing speech produced by people with and without cognitive decline. These findings are being used to develop multimodal models for the detection of depression and Alzheimer's dementia.
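    As a concrete illustration of the simplest fusion technique compared in work like this, the sketch below concatenates lexical, acoustic, and pause features into a single vector before classification (early, feature-level fusion). All feature names, dimensions, and data are hypothetical placeholders, not the study's actual pipeline.

```python
# Minimal early-fusion sketch: concatenate per-speaker feature vectors
# from several modalities and train one classifier on the result.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100
lexical = rng.normal(size=(n, 50))   # e.g. averaged word embeddings (assumed)
acoustic = rng.normal(size=(n, 20))  # e.g. acoustic functionals (assumed)
pauses = rng.normal(size=(n, 5))     # e.g. pause counts/durations (assumed)
labels = rng.integers(0, 2, size=n)  # 1 = cognitive decline (synthetic)

X = np.hstack([lexical, acoustic, pauses])  # early (feature-level) fusion
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.score(X, labels))
```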

    Alzheimer's Dementia Recognition Using Acoustic, Lexical, Disfluency and Speech Pause Features Robust to Noisy Inputs

    INTERSPEECH 2021. arXiv admin note: substantial text overlap with arXiv:2106.09668. We present two multimodal fusion-based deep learning models that consume ASR-transcribed speech and acoustic data simultaneously to classify whether a speaker in a structured diagnostic task has Alzheimer's Disease and to what degree, evaluating the ADReSSo challenge 2021 data. Our best model, a BiLSTM with highway layers using words, word probabilities, disfluency features, pause information, and a variety of acoustic features, achieves an accuracy of 84% and an RMSE of 4.26 when predicting MMSE cognitive scores. While predicting cognitive decline is more challenging, our models show improvement from the multimodal approach and from word probabilities, disfluency, and pause information over word-only models. We show considerable gains for AD classification using multimodal fusion and gating, which can effectively deal with noisy inputs from acoustic features and ASR hypotheses.
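    A minimal PyTorch sketch of the architecture family described above: a BiLSTM over per-word inputs (word embeddings concatenated with aligned acoustic/pause features), a highway layer, and separate heads for AD classification and MMSE regression. Dimensions, feature alignment, and layer sizes are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Highway(nn.Module):
    """Highway layer: y = T(x) * H(x) + (1 - T(x)) * x."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)  # candidate H(x)
        self.gate = nn.Linear(dim, dim)       # carry gate T(x)

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))
        h = torch.relu(self.transform(x))
        return t * h + (1 - t) * x            # mix transformed and raw input

class MultimodalBiLSTM(nn.Module):
    def __init__(self, word_dim=100, acoustic_dim=25, hidden=64):  # assumed sizes
        super().__init__()
        self.lstm = nn.LSTM(word_dim + acoustic_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.highway = Highway(2 * hidden)
        self.ad_head = nn.Linear(2 * hidden, 2)    # AD vs non-AD
        self.mmse_head = nn.Linear(2 * hidden, 1)  # MMSE score regression

    def forward(self, words, acoustic):
        x = torch.cat([words, acoustic], dim=-1)   # per-word fusion
        _, (h, _) = self.lstm(x)
        h = torch.cat([h[0], h[1]], dim=-1)        # final fwd/bwd hidden states
        h = self.highway(h)
        return self.ad_head(h), self.mmse_head(h)

model = MultimodalBiLSTM()
logits, mmse = model(torch.randn(4, 30, 100), torch.randn(4, 30, 25))
print(logits.shape, mmse.shape)  # torch.Size([4, 2]) torch.Size([4, 1])
```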

    Privacy-aware early detection of COVID-19 through adversarial training

    Early detection of COVID-19 is an ongoing area of research that can help with triage, monitoring, and general health assessment of potential patients, and may reduce operational strain on hospitals coping with the coronavirus pandemic. Different machine learning techniques have been used in the literature to detect potential cases of coronavirus using routine clinical data (blood tests and vital-sign measurements). Data breaches and information leakage when using these models can bring reputational damage and cause legal issues for hospitals. Despite this, protecting healthcare models against leakage of potentially sensitive information is an understudied research area. In this study, two machine learning techniques that aim to predict a patient's COVID-19 status are examined. Using adversarial training, robust deep learning architectures are explored with the aim of protecting attributes related to demographic information about the patients. The two models examined in this work are intended to preserve sensitive information against adversarial attacks and information leakage. In a series of experiments using datasets from the Oxford University Hospitals (OUH), Bedfordshire Hospitals NHS Foundation Trust (BH), University Hospitals Birmingham NHS Foundation Trust (UHB), and Portsmouth Hospitals University NHS Trust (PUH), two neural networks are trained and evaluated. These networks predict PCR test results using information from basic laboratory blood tests and vital signs collected from a patient upon arrival to the hospital. The level of privacy each of the models can provide is assessed, and the efficacy and robustness of the proposed architectures are compared with a relevant baseline. One of the main contributions of this work is its particular focus on the development of effective COVID-19 detection models with built-in mechanisms to selectively protect sensitive attributes against adversarial attacks. The results on a hold-out test set and external validation confirmed that there was no impact on the generalisability of the model when using adversarial learning.
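    One common way to realise adversarial protection of sensitive attributes is a gradient-reversal adversary; the sketch below shows that pattern under the assumption that it approximates the paper's setup, which is not spelled out in the abstract. The encoder is trained so the COVID-19 head succeeds while an adversary head fails to recover a demographic attribute; all dimensions and data are synthetic.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None  # reversed gradient reaches the encoder

encoder = nn.Sequential(nn.Linear(30, 32), nn.ReLU())  # 30 routine features (assumed)
covid_head = nn.Linear(32, 2)   # PCR-positive vs negative
adversary = nn.Linear(32, 2)    # tries to recover a sensitive attribute
opt = torch.optim.Adam([*encoder.parameters(), *covid_head.parameters(),
                        *adversary.parameters()], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 30)                  # synthetic bloods + vital signs
y_covid = torch.randint(0, 2, (64,))     # synthetic PCR labels
y_attr = torch.randint(0, 2, (64,))      # synthetic demographic labels

z = encoder(x)
loss = loss_fn(covid_head(z), y_covid) \
     + loss_fn(adversary(GradReverse.apply(z, 1.0)), y_attr)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```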

    Multi-Modal Fusion with Gating Using Audio, Lexical and Disfluency Features for Alzheimer's Dementia Recognition from Spontaneous Speech

    This paper is a submission to the Alzheimer's Dementia Recognition through Spontaneous Speech (ADReSS) challenge, which aims to develop methods that can assist in the automated prediction of the severity of Alzheimer's Disease from speech data. We focus on acoustic and natural language features for cognitive impairment detection in spontaneous speech, in the context of Alzheimer's Disease diagnosis and mini-mental state examination (MMSE) score prediction. We propose a model that obtains unimodal decisions from different LSTMs, one for each modality of text and audio, and then combines them using a gating mechanism for the final prediction. We focus on sequential modelling of text and audio and investigate whether the disfluencies present in individuals' speech relate to the extent of their cognitive impairment. Our results show that the proposed classification and regression schemes obtain very promising results on both development and test sets. This suggests Alzheimer's Disease can be detected successfully with sequence modelling of the speech data from medical sessions.
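    A minimal sketch of the decision-level gating idea: one LSTM per modality, with a learned sigmoid gate weighing the two unimodal representations before the final prediction. Sizes and inputs are illustrative assumptions, not the submitted system's configuration.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, text_dim=100, audio_dim=25, hidden=32):  # assumed sizes
        super().__init__()
        self.text_lstm = nn.LSTM(text_dim, hidden, batch_first=True)
        self.audio_lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.gate = nn.Linear(2 * hidden, hidden)
        self.out = nn.Linear(hidden, 2)  # AD vs non-AD

    def forward(self, text, audio):
        _, (ht, _) = self.text_lstm(text)    # unimodal text representation
        _, (ha, _) = self.audio_lstm(audio)  # unimodal audio representation
        ht, ha = ht[-1], ha[-1]
        g = torch.sigmoid(self.gate(torch.cat([ht, ha], dim=-1)))
        fused = g * ht + (1 - g) * ha        # gate decides the modality mix
        return self.out(fused)

model = GatedFusion()
print(model(torch.randn(4, 40, 100), torch.randn(4, 40, 25)).shape)
```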

    Alzheimer’s Dementia Recognition From Spontaneous Speech Using Disfluency and Interactional Features

    Alzheimer’s disease (AD) is a progressive, neurodegenerative disorder mainly characterized by memory loss with deficits in other cognitive domains, including language, visuospatial abilities, and changes in behavior. Detecting diagnostic biomarkers that are noninvasive and cost-effective is of great value not only for clinical assessments and diagnostics but also for research purposes. Several previous studies have investigated AD diagnosis via the acoustic, lexical, syntactic, and semantic aspects of speech and language. Other studies include approaches from conversation analysis that look at more interactional aspects, showing that disfluencies such as fillers and repairs, and purely nonverbal features such as inter-speaker silence, can be key features of AD conversations. These kinds of features, if useful for diagnosis, may have many advantages: They are simple to extract and relatively language-, topic-, and task-independent. This study aims to quantify the role and contribution of these features of interaction structure in predicting whether a dialogue participant has AD. We used a subset of the Carolinas Conversation Collection dataset of patients with moderate-stage AD within the age range 60–89 and similar-aged non-AD patients with other health conditions. Our feature analysis comprised two sets: disfluency features, including indicators such as self-repairs and fillers, and interactional features, including overlaps, turn-taking behavior, and distributions of different types of silence both within patient speech and between patient and interviewer speech. Statistical analysis showed significant differences between AD and non-AD groups for several disfluency features (edit terms, verbatim repeats, and substitutions) and interactional features (lapses, gaps, attributable silences, turn switches per minute, standardized phonation time, and turn length). For the classification of AD patient conversations vs. non-AD patient conversations, we achieved 83% accuracy with disfluency features, 83% accuracy with interactional features, and an overall accuracy of 90% when combining both feature sets using support vector machine classifiers. The discriminative power of these features, perhaps combined with more conventional linguistic features, therefore shows potential for integration into noninvasive clinical assessments for AD at advanced stages.
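    A minimal sketch of this classification setup: per-conversation disfluency and interactional feature vectors, concatenated and fed to an SVM, mirroring the combined-feature configuration. The feature values below are synthetic placeholders, and the feature definitions follow the paper only loosely.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60
disfluency = rng.normal(size=(n, 3))     # e.g. edit terms, repeats, substitutions
interactional = rng.normal(size=(n, 6))  # e.g. gaps, lapses, turn switches/min
y = rng.integers(0, 2, size=n)           # 1 = AD conversation (synthetic)

X = np.hstack([disfluency, interactional])  # combined feature set
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```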

    Detecting depression with word-level multimodal fusion

    Semi-structured clinical interviews are frequently used diagnostic tools for identifying depression during an assessment phase. In addition to the lexical content of a patient's responses, multimodal cues concurrent with the responses are indicators of their motor and cognitive state, including those derivable from voice quality and gestural behaviour. In this paper, we use information from different modalities to train a classifier capable of detecting the binary state of a subject (clinically depressed or not) as well as the level of their depression. We propose a model that performs modality fusion incrementally after each word in an utterance, using a time-dependent recurrent approach in a deep learning set-up. To mitigate noisy modalities, we utilise fusion gates that control the degree to which the audio or visual modality contributes to the final prediction. Our results show the effectiveness of word-level multimodal fusion, achieving state-of-the-art results in depression detection and outperforming early feature-level and late fusion techniques.
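    A minimal sketch of word-level fusion with gates: at each word, the concurrent audio and visual features pass through sigmoid gates conditioned on the word embedding, and the gated features are concatenated with the word before entering the recurrent layer, so fusion happens incrementally rather than over utterance-level summaries. All dimensions are assumptions.

```python
import torch
import torch.nn as nn

class WordLevelFusion(nn.Module):
    def __init__(self, word=100, audio=20, visual=20, hidden=64):  # assumed sizes
        super().__init__()
        self.audio_gate = nn.Linear(word + audio, audio)
        self.visual_gate = nn.Linear(word + visual, visual)
        self.lstm = nn.LSTM(word + audio + visual, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)  # depressed vs not

    def forward(self, w, a, v):
        # Gates decide, per word, how much of each noisy modality to admit.
        ga = torch.sigmoid(self.audio_gate(torch.cat([w, a], -1)))
        gv = torch.sigmoid(self.visual_gate(torch.cat([w, v], -1)))
        x = torch.cat([w, ga * a, gv * v], -1)  # gated per-word fusion
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])

m = WordLevelFusion()
print(m(torch.randn(4, 25, 100), torch.randn(4, 25, 20),
        torch.randn(4, 25, 20)).shape)  # torch.Size([4, 2])
```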

    Real-world evaluation of AI driven COVID-19 triage for emergency admissions: External validation & operational assessment of lab-free and high-throughput screening solutions

    Background: Uncertainty in patients' COVID-19 status contributes to treatment delays, nosocomial transmission, and operational pressures in hospitals. However, the typical turnaround time for laboratory PCR remains 12–24 h, and lateral flow devices (LFDs) have limited sensitivity. Previously, we have shown that artificial intelligence-driven triage (CURIAL-1.0) can provide rapid COVID-19 screening using clinical data routinely available within 1 h of arrival to hospital. Here, we aimed to shorten the time from arrival at the emergency department to the availability of a result, to perform external and prospective validation, and to deploy a novel laboratory-free screening tool in a UK emergency department.

    Methods: We optimised our previous model, removing less informative predictors to improve generalisability and speed, developing the CURIAL-Lab model with vital signs and readily available blood tests (full blood count [FBC]; urea, creatinine, and electrolytes; liver function tests; and C-reactive protein) and the CURIAL-Rapide model with vital signs and FBC alone. Models were validated externally for emergency admissions to University Hospitals Birmingham, Bedfordshire Hospitals, and Portsmouth Hospitals University National Health Service (NHS) trusts, and prospectively at Oxford University Hospitals, by comparison with PCR testing. Next, we compared model performance directly against LFDs and evaluated a combined pathway that triaged patients who had either a positive CURIAL model result or a positive LFD to a COVID-19-suspected clinical area. Lastly, we deployed CURIAL-Rapide alongside an approved point-of-care FBC analyser to provide laboratory-free COVID-19 screening at the John Radcliffe Hospital (Oxford, UK). Our primary improvement outcome was time-to-result, and our performance measures were sensitivity, specificity, positive and negative predictive values, and area under the receiver operating characteristic curve (AUROC).

    Findings: 72 223 patients met eligibility criteria across the four validating hospital groups, in a total validation period spanning Dec 1, 2019, to March 31, 2021. CURIAL-Lab and CURIAL-Rapide performed consistently across trusts (AUROC range 0·858–0·881, 95% CI 0·838–0·912, for CURIAL-Lab and 0·836–0·854, 0·814–0·889, for CURIAL-Rapide), achieving highest sensitivity at Portsmouth Hospitals (84·1%, Wilson's 95% CI 82·5–85·7, for CURIAL-Lab and 83·5%, 81·8–85·1, for CURIAL-Rapide) at specificities of 71·3% (70·9–71·8) for CURIAL-Lab and 63·6% (63·1–64·1) for CURIAL-Rapide. When combined with LFDs, model predictions improved triage sensitivity from 56·9% (51·7–62·0) for LFDs alone to 85·6% with CURIAL-Lab (81·6–88·9; AUROC 0·925) and 88·2% with CURIAL-Rapide (84·4–91·1; AUROC 0·919), thereby reducing missed COVID-19 cases by 65% with CURIAL-Lab and 72% with CURIAL-Rapide. For the prospective deployment of CURIAL-Rapide, 520 patients were enrolled for point-of-care FBC analysis between Feb 18 and May 10, 2021, of whom 436 received confirmatory PCR testing and ten (2·3%) tested positive. Median time from arrival to a CURIAL-Rapide result was 45 min (IQR 32–64), 16 min (26·3%) sooner than with LFDs (61 min, 37–99; log-rank p<0·0001), and 6 h 52 min (90·2%) sooner than with PCR (7 h 37 min, 6 h 5 min to 15 h 39 min; p<0·0001). Classification performance was high, with sensitivity of 87·5% (95% CI 52·9–97·8), specificity of 85·4% (81·3–88·7), and negative predictive value of 99·7% (98·2–99·9). CURIAL-Rapide correctly excluded infection for 31 (58·5%) of 53 patients who were triaged by a physician to a COVID-19-suspected area but went on to test negative by PCR.

    Interpretation: Our findings show the generalisability, performance, and real-world operational benefits of artificial intelligence-driven screening for COVID-19 over standard-of-care in emergency departments. CURIAL-Rapide provided rapid, laboratory-free screening when used with near-patient FBC analysis, and was able to reduce the number of patients who tested negative for COVID-19 but were triaged to COVID-19-suspected areas.
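    For the reporting statistics used above, the sketch below computes sensitivity with a Wilson score 95% CI, the interval style cited in the Findings. The counts are hypothetical placeholders, not taken from the study.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

tp, fn = 7, 1          # hypothetical counts: 7 of 8 positives detected
sens = tp / (tp + fn)
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sens:.1%}, Wilson 95% CI {lo:.1%}-{hi:.1%}")
```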